An Online Gradient Method with Smoothing L_0 Regularization for Pi-Sigma Network

Authors
Abstract


Similar articles

Training Pi-Sigma Network by Online Gradient Algorithm with Penalty for Small Weight Update

A pi-sigma network is a class of feedforward neural networks with product units in the output layer. The online gradient algorithm is the simplest and most commonly used training method for feedforward neural networks. A problem arises, however, when the online gradient algorithm is applied to pi-sigma networks: the weight-update increments may become very small, especially early in tra...
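The structure described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes K linear summing units feeding a single product unit with a sigmoid activation, squared error, and per-sample (online) updates.

```python
import numpy as np

def pi_sigma_forward(W, x):
    # W: (K, n) summing-layer weights, x: (n,) input.
    # The summing units' outputs are multiplied (product unit),
    # then passed through a sigmoid.
    h = W @ x
    return 1.0 / (1.0 + np.exp(-np.prod(h)))

def online_gradient_step(W, x, y, lr=0.1):
    # One online-gradient update on the squared error for a single sample.
    h = W @ x
    p = np.prod(h)
    out = 1.0 / (1.0 + np.exp(-p))
    err = out - y
    dp = err * out * (1.0 - out)            # d(error)/d(product)
    for k in range(W.shape[0]):
        others = np.prod(np.delete(h, k))   # product of the other summing units
        W[k] -= lr * dp * others * x
    return W, 0.5 * err ** 2
```

Note how the gradient for each summing unit is scaled by the product of the *other* units' outputs; when those outputs are small, the update increment shrinks, which is the difficulty the abstract refers to.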


Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms

This paper investigates an online gradient method with inner-penalty terms for a feedforward network called the pi-sigma network. This network uses product cells as output units to indirectly incorporate the capabilities of higher-order networks while requiring fewer weights and processing units. Penalty-term methods have been widely used to improve the generalization performan...
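The effect of adding a penalty term to the instantaneous error can be sketched in one line. This is an illustrative weight-decay-style penalty `lam * sum(w**2)`, not necessarily the exact inner-penalty used in the paper: its gradient `2*lam*w` simply joins the error gradient at every online update, pulling weights toward zero.

```python
import numpy as np

def penalized_step(w, grad_err, lr=0.05, lam=1e-2):
    # Online gradient step on E(w) = per-sample error + lam * sum(w**2).
    # grad_err is the gradient of the error part alone.
    return w - lr * (grad_err + 2 * lam * w)
```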


Convergence of online gradient method for feedforward neural networks with smoothing L1/2 regularization penalty

Minimizing a regularization term during training has been recognized as an important objective for sparse modeling and generalization in feedforward neural networks. Most studies so far have focused on the popular L2 regularization penalty. In this paper, we consider the convergence of an online gradient method with a smoothing L1/2 regularization term. For the usual L1/2 regularization, th...
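The smoothing idea can be illustrated concretely. The plain L1/2 term `|w|**0.5` is non-differentiable at `w = 0`; a common remedy (used here as an illustrative choice, not necessarily the paper's exact smoothing function) replaces `|w|` on `[-a, a]` with a polynomial that matches `|w|` and its derivative at `|w| = a`:

```python
import numpy as np

def smoothed_abs(w, a=0.1):
    # Smooth surrogate for |w|: equals |w| for |w| >= a, and a C^1
    # polynomial on (-a, a) with value 3a/8 at the origin.
    w = np.asarray(w, dtype=float)
    inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), inner)

def smoothed_l12_penalty(weights, lam=1e-3, a=0.1):
    # Smoothing L1/2 regularization term: lam * sum(f(w)^(1/2)),
    # differentiable everywhere, including at w = 0.
    return lam * np.sum(smoothed_abs(weights, a) ** 0.5)
```

Because the surrogate is bounded away from zero near the origin, the penalty's gradient stays finite there, which avoids the oscillation that the raw L1/2 term induces.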


Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks

The aim of this paper is to develop a novel method for pruning feedforward neural networks by introducing an L1/2 regularization term into the error function. This procedure forces weights to become smaller during training so that they can eventually be removed afterwards. The usual L1/2 regularization term involves absolute values and is not differentiable at the origin, which typically causes os...


An Iterative Conjugate Gradient Regularization Method for Image Restoration

Image restoration is an ill-posed inverse problem, for which regularization methods have been introduced to suppress over-amplification. In this paper, we propose applying iterative regularization to the image restoration problem and present a nested iterative method, called the iterative conjugate gradient regularization method. Convergence properties are established in detail. Based on [...
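A minimal sketch of the underlying idea, under simplifying assumptions (a linear restoration model `b = A x + noise`, a small dense operator, and plain Tikhonov-regularized normal equations rather than the paper's nested scheme): conjugate gradient is run on `(A^T A + lam I) x = A^T b`, where early termination of CG itself also acts as a regularizer.

```python
import numpy as np

def cg_restore(A, b, lam=1e-2, iters=50, tol=1e-10):
    # Conjugate gradient on the regularized normal equations
    # (A^T A + lam I) x = A^T b; the matrix is symmetric positive definite.
    n = A.shape[1]
    M = A.T @ A + lam * np.eye(n)
    rhs = A.T @ b
    x = np.zeros(n)
    r = rhs - M @ x          # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(iters):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if rs_new < tol:     # early stopping: fewer iterations = more regularization
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```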



Journal

Journal title: Transactions on Machine Learning and Artificial Intelligence

Year: 2018

ISSN: 2054-7390

DOI: 10.14738/tmlai.66.5838